Definition: Let $V$ be a vector space. A (finite) set of vectors $S=\{v_1,v_2,\ldots,v_n\}$ is called a basis for $V$ if and only if
1. $\mathrm{Span}(S)=V$, and
2. $S$ is linearly independent.
A (finite dimensional) linear space V has many bases. All bases of a linear space have the same number of elements. This number is called the dimension of the linear space.
Example: $V=\mathbb{R}^2$. Consider the two bases:
$S_1=\left\{\begin{bmatrix}0\\1\end{bmatrix},\begin{bmatrix}1\\0\end{bmatrix}\right\}$
$S_1$ is a linearly independent set, since $c_1\begin{bmatrix}0\\1\end{bmatrix}+c_2\begin{bmatrix}1\\0\end{bmatrix}=\begin{bmatrix}0\\0\end{bmatrix}$ implies $c_1=c_2=0$, where $c_1,c_2\in\mathbb{R}$ (the field $F$).
$\mathrm{Span}(S_1)=V=\mathbb{R}^2$, since any $\begin{bmatrix}a\\b\end{bmatrix}=b\begin{bmatrix}0\\1\end{bmatrix}+a\begin{bmatrix}1\\0\end{bmatrix}$.
Hence $S_1$ is a basis for $V$.
$S_2=\left\{\begin{bmatrix}1\\1\end{bmatrix},\begin{bmatrix}2\\3\end{bmatrix}\right\}$
$S_2$ is a linearly independent set, since $c_1\begin{bmatrix}1\\1\end{bmatrix}+c_2\begin{bmatrix}2\\3\end{bmatrix}=\begin{bmatrix}0\\0\end{bmatrix}$ gives $c_1+2c_2=0$ and $c_1+3c_2=0$; subtracting yields $c_2=0$ and then $c_1=0$.
$\mathrm{Span}(S_2)=V=\mathbb{R}^2$, so $S_2$ is also a basis for $V$.
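These hand checks can be confirmed numerically. The sketch below (NumPy, not part of the original notes) places the vectors of $S_2$ as columns of a matrix and tests independence via the determinant and rank.

```python
import numpy as np

# Columns are the vectors of S2 = {(1, 1)^T, (2, 3)^T}.
A = np.array([[1.0, 2.0],
              [1.0, 3.0]])

# A square matrix has linearly independent columns iff det != 0;
# full rank (= 2) also means the columns span R^2.
det = np.linalg.det(A)
rank = np.linalg.matrix_rank(A)
print(det, rank)
```

Here $\det A = 1 \ne 0$ and the rank is $2$, confirming both conditions of the basis definition at once.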
Example: $V=\mathbb{R}^2$ and $F=\mathbb{R}$. Consider the basis $B=\left\{\begin{bmatrix}1\\0\end{bmatrix},\begin{bmatrix}1\\1\end{bmatrix}\right\}$ and the vector $y=\begin{bmatrix}2\\3\end{bmatrix}$. Find $[y]_B$.
Solution:
$B$ is a linearly independent set, since $c_1\begin{bmatrix}1\\0\end{bmatrix}+c_2\begin{bmatrix}1\\1\end{bmatrix}=\begin{bmatrix}0\\0\end{bmatrix}$ implies $c_1=c_2=0$, where $c_1,c_2\in\mathbb{R}$ (the field $F$).
$\mathrm{Span}(B)=V=\mathbb{R}^2$, so $B$ is a basis.
Write $[y]_B=\begin{bmatrix}c_1\\c_2\end{bmatrix}$. Then
$y=c_1\begin{bmatrix}1\\0\end{bmatrix}+c_2\begin{bmatrix}1\\1\end{bmatrix}=\begin{bmatrix}c_1+c_2\\c_2\end{bmatrix}$,
so $c_1+c_2=2$ and $c_2=3$,
giving $c_1=-1$ and $c_2=3$.
Hence $[y]_B=\begin{bmatrix}-1\\3\end{bmatrix}$.
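Finding $[y]_B$ amounts to solving the linear system $Bc=y$, where the basis vectors form the columns of $B$. A minimal sketch in NumPy (not part of the original notes):

```python
import numpy as np

# Basis vectors of B as columns: b1 = (1, 0)^T, b2 = (1, 1)^T.
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])
y = np.array([2.0, 3.0])

# The coordinate vector [y]_B solves B @ c = y.
c = np.linalg.solve(B, y)
print(c)  # [-1.  3.]
```

This matches the coordinates found by hand above.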
Example: $V=\mathrm{Span}\{\cos(t),\sin(t)\}$ and $F=\mathbb{R}$. Consider the basis $B=\{\cos(t),\sin(t)\}$ and the vector $y=\cos\left(t-\frac{\pi}{3}\right)$. Find $[y]_B$.
Solution:
$B$ is a linearly independent set, since $c_1\cos(t)+c_2\sin(t)=0$ for all $t$ implies $c_1=c_2=0$ (take $t=0$ to get $c_1=0$ and $t=\frac{\pi}{2}$ to get $c_2=0$).
By the angle-difference identity, $y=\cos\left(t-\frac{\pi}{3}\right)=\cos\left(\frac{\pi}{3}\right)\cos(t)+\sin\left(\frac{\pi}{3}\right)\sin(t)=\frac{1}{2}\cos(t)+\frac{\sqrt{3}}{2}\sin(t)$, hence $[y]_B=\begin{bmatrix}1/2\\\sqrt{3}/2\end{bmatrix}$.
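The angle-difference identity gives $\cos\left(t-\frac{\pi}{3}\right)=\frac{1}{2}\cos(t)+\frac{\sqrt{3}}{2}\sin(t)$, so $[y]_B=\begin{bmatrix}1/2\\\sqrt{3}/2\end{bmatrix}$. A quick numeric spot check (Python, not part of the original notes):

```python
import math

# Candidate coordinates of y = cos(t - pi/3) in B = {cos t, sin t}.
c1, c2 = 0.5, math.sqrt(3) / 2

# Verify cos(t - pi/3) = c1*cos(t) + c2*sin(t) at several sample points.
for t in [0.0, 0.7, 1.5, 3.0]:
    lhs = math.cos(t - math.pi / 3)
    rhs = c1 * math.cos(t) + c2 * math.sin(t)
    assert abs(lhs - rhs) < 1e-12
```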
Claim: For a given basis $B$, the representation $[y]_B$ of a vector $y$ is unique.
Proof: Suppose $y$ has two representations in the basis $B=\{b_1,b_2\}$:
$[y]_B=\begin{bmatrix}c_1\\c_2\end{bmatrix}$ and $[y]_B=\begin{bmatrix}d_1\\d_2\end{bmatrix}$.
Then $y=c_1b_1+c_2b_2=d_1b_1+d_2b_2$, so $(c_1-d_1)b_1+(c_2-d_2)b_2=0$.
Since $B$ is linearly independent, $c_1-d_1=0$ and $c_2-d_2=0$.
Hence $c_1=d_1$ and $c_2=d_2$. ∎
Remark: The (unique) representation of a vector $y$ in a basis $B$ is called the coordinate vector of $y$ with respect to $B$.
Ordered Basis
Definition: Let $V$ be a vector space. An ordered set of basis vectors $S=(v_1,v_2,\ldots,v_n)$ is called an ordered basis for $V$.
If $B=(v_1,v_2,\ldots,v_n)$ is an ordered basis for $V$, then every vector $x\in V$ can be written uniquely as a linear combination of the basis vectors, $x=x_1v_1+x_2v_2+\cdots+x_nv_n$, and its coordinate vector is $[x]_B=\begin{bmatrix}x_1&x_2&\cdots&x_n\end{bmatrix}^T$.
Theorem: Let $V$ be an $n$-dimensional vector space over $\mathbb{R}$. Let $B_1$ and $B_2$ be two bases for $V$. Then there exists a unique $n\times n$ real invertible matrix $P$ such that $[x]_{B_2}=P[x]_{B_1}$ for all $x\in V$.
Proof: By construction: take the $i$-th column of $P$ to be $[v_i]_{B_2}$, where $v_i$ is the $i$-th vector of $B_1$; linearity of the coordinate map then gives $[x]_{B_2}=P[x]_{B_1}$ for all $x\in V$.
Example: Consider $V=$ polynomials of degree $\le 2$ with coefficients in $\mathbb{R}$ and the bases $B_1=\{1,\,t-1,\,(t-1)^2\}$ and $B_2=\{1,\,t,\,t^2\}$.
Find the matrix $P$ such that $[x]_{B_1}=P[x]_{B_2}$ for all $x\in V$.
Solution:
Write $[x]_{B_1}=\begin{bmatrix}c_1\\c_2\\c_3\end{bmatrix}$ and $[x]_{B_2}=\begin{bmatrix}d_1\\d_2\\d_3\end{bmatrix}$.
Expressing the vectors of $B_2$ in the basis $B_1$: $1=1$, $t=1+(t-1)$, and $t^2=1+2(t-1)+(t-1)^2$. These coordinate vectors form the columns of $P$:
$P=\begin{bmatrix}1&1&1\\0&1&2\\0&0&1\end{bmatrix}$.
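Since $1=1$, $t=1+(t-1)$, and $t^2=1+2(t-1)+(t-1)^2$, the columns of $P$ are the $B_1$-coordinates of $1,t,t^2$. A numerical sanity check of this $P$ (NumPy, not part of the original notes):

```python
import numpy as np

# Change-of-basis matrix satisfying [x]_{B1} = P [x]_{B2}:
# columns are the B1-coordinates of 1, t, t^2.
P = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 2.0],
              [0.0, 0.0, 1.0]])

# Take x = 2 + 3t + t^2, i.e. [x]_{B2} = (2, 3, 1)^T.
d = np.array([2.0, 3.0, 1.0])
c = P @ d  # [x]_{B1}

# Check that c1 + c2*(t-1) + c3*(t-1)^2 reproduces x at sample points.
for t in [0.0, 1.0, 2.5]:
    assert abs((2 + 3 * t + t ** 2)
               - (c[0] + c[1] * (t - 1) + c[2] * (t - 1) ** 2)) < 1e-9
print(c)  # [6. 5. 1.]
```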
Example: Show that the matrix $P$ is invertible.
Proof: Let $V$ be an $n$-dimensional vector space over $\mathbb{R}$, and let $B_1$ and $B_2$ be bases for $V$ with $[x]_{B_2}=P[x]_{B_1}$ for all $x\in V$. By the same construction with the roles of the bases exchanged, there is a matrix $Q\in\mathbb{R}^{n\times n}$ with $[x]_{B_1}=Q[x]_{B_2}$ for all $x\in V$. Then $QP[x]_{B_1}=[x]_{B_1}$ for every $x\in V$, so $QP=I$, and hence $P$ is invertible with $P^{-1}=Q$. ∎
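For the polynomial example above, the reverse change of basis gives a matrix $Q$ with $QP=I$. A numerical check (NumPy, not part of the original notes):

```python
import numpy as np

# P from the polynomial example: [x]_{B1} = P [x]_{B2}.
P = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 2.0],
              [0.0, 0.0, 1.0]])

# Q maps the other way: [x]_{B2} = Q [x]_{B1}. Its columns are the
# B2-coordinates of 1, t-1, (t-1)^2, since
# 1 = 1, t-1 = -1 + t, (t-1)^2 = 1 - 2t + t^2.
Q = np.array([[1.0, -1.0, 1.0],
              [0.0, 1.0, -2.0],
              [0.0, 0.0, 1.0]])

print(np.allclose(Q @ P, np.eye(3)))  # True
```

Since $QP=I$ for square matrices forces $PQ=I$ as well, $Q$ is exactly $P^{-1}$.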